Search Results: "gwolf"

10 April 2023

Gunnar Wolf: Twenty years

Twenty years. A seemingly big, very round number, at least for me. I can recall several very well-known songs mentioning this timespan; a quick Internet search yields many more. And yes, in human terms, 20 years is quite a big deal. And, of course, I have long been waiting for the right time to write this post. Because twenty years ago, I got the mail. Of course, the mail notifying me I had successfully finished my NM process and, as of April 2003, could consider myself to be a full-fledged Debian Project member. Maybe by sheer chance, it was also today that we spent the evening at Max's house. I never worked directly with Max, but we both worked at Universidad Pedagógica Nacional at the same time back then.

But… Of course, a single twentyversary is not enough! I don't have the exact date, but I guess I might be off by some two or three months, judging by other things I remember from back then. This year, I turn forty as an Emacs and TeX user! Back in 1983, on Friday nights, I went with my father to IIMAS (where I'm currently affiliated as a PhD student, and where he was a researcher between 1971 and the mid-1990s) and used the computer (one of the two big computers they had in the Institute). And what could a seven-year-old boy do? Of course, use the programs this great Foonly F2 system had: Emacs and TeX (this was still before LaTeX). 40 years… And I still use the same base tools for my daily work, day in, day out.

25 March 2023

Gunnar Wolf: Four days to fix a simple configuration bug

Phew! Today, after four days of combing through code I am unfamiliar with, I was finally able to change my expression. I'm finally at the part of my PhD work where I am tasked with implementing the protocol I claim improves on the current situation. I wrote a script to deploy the infrastructure I need for the experiment, and was not expecting any issues: I am not (yet) familiar with the Go language (in which the Hockeypuck key server is developed), but I have managed to install it several times, and it holds no terrible surprises for me anymore. Or so I thought. So, how come the five servers in my laboratory network don't gossip with each other? The logs don't show anything clear, only a succession of this:
hockeypuck[5295]: time="2023-03-24T00:00:28-06:00" level=error msg="recon with :0 failed" error="[ /srv/hockeypuck/packaging/src/gopkg.in/hockeypuck/conflux.v2/recon/gossip.go:109:    dial tcp :0: connect: connection refused ]" label="gossip :11370"
hockeypuck[5295]: time="2023-03-24T00:00:28-06:00" level=info msg="waiting 27s for next gossip attempt" label="gossip :11370"
And while tcp :0: connect: connection refused sounds fishy… it took me too long to find the reason. But, at least, along the way I decided to find my errors by debugging the code, rather than by rebuilding the laboratory and randomly stabbing at the configuration. And yes, I finally came to my senses, and found out my silly mistake was to have my configuration read:
[hockeypuck.conflux.recon.partner.10.0.3.13]
httpAddr="10.0.3.13:11371"
reconAddr="10.0.3.13:11370"
where it should have read:
[hockeypuck.conflux.recon.partner.10-0-3-13]
httpAddr="10.0.3.13:11371"
reconAddr="10.0.3.13:11370"
Because, of course, TOML would find no child declarations for hockeypuck.conflux.recon.partner.10 (each following period opens yet another nesting level, so the entry ends up being something entirely distinct from what I thought I had specified). Anyway, this made me at least… Now, why am I posting this? Not only because I feel very happy and wanted to share my a-ha moment, but also because I'm sure this time, which seems mindlessly spent poking at Go without knowing the basics, will somehow be rewarded. I have to learn bits of the language anyway, so it's time well spent. Or so I hope. (Oh, and the funny spectacles? I am not sure, but I believe them to have been the property of my grandfather or great-grandfather when they came from Europe, in 1947 or in 1928 respectively. One lens is sadly lost, but other than that, I love them!)
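Out of curiosity, here is a quick way to see how a TOML parser reads the two spellings above. This is just an illustration using Python's standard tomllib module (Hockeypuck itself is written in Go, so this is not its actual parsing code):
import tomllib  # Python 3.11+ standard library; illustration only

dotted = """
[hockeypuck.conflux.recon.partner.10.0.3.13]
reconAddr="10.0.3.13:11370"
"""
dashed = """
[hockeypuck.conflux.recon.partner.10-0-3-13]
reconAddr="10.0.3.13:11370"
"""

# Each period in a bare key opens another nesting level, so the address
# ends up buried several tables deep instead of naming a single partner:
print(tomllib.loads(dotted)["hockeypuck"]["conflux"]["recon"]["partner"])
# {'10': {'0': {'3': {'13': {'reconAddr': '10.0.3.13:11370'}}}}}

# With dashes, the whole address is one key, which is what was intended:
print(tomllib.loads(dashed)["hockeypuck"]["conflux"]["recon"]["partner"])
# {'10-0-3-13': {'reconAddr': '10.0.3.13:11370'}}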

Gunnar Wolf: Now that we are talking about kernel building... What about firebuild?

After my last post, Bálint (who prompted it with his own last post) suggested I should do a hybrid test of his tests and my extremes. He suggested I should build the Linux kernel using my Raspberry Pi 4 (8GB model), but using the Firebuild build accelerator. Before going any further: I must make clear that while Firebuild is freely redistributable, it is not made available under a free license. It is free for personal use or commercial trial, but otherwise requires licensing. Bálint managed to build a Linux kernel in just over 8 seconds. So, how did my test go? My previous experiment, using -j 4, built Linux in ~100 minutes; that was about a year ago, and I'm now building Linux 6.1, so I timed this again. To get a baseline, I built my kernel from a just-unpacked tree, just as usual:
# cd /usr/src/linux-source-6.1
# make clean
# make defconfig
# time make -j4
(...)
real    117m30.588s
user    392m41.434s
sys     52m2.556s
Of course, having all of the object files already built makes the rebuild process quite a bit faster (this is still done without firebuild). I understand that calling make defconfig without cleaning does not change much, but I saw it often referenced in firebuild's docs, so I'm leaving it in:
# time make -j 4
(...)
real    0m43.822s
user    1m36.577s
sys     0m40.805s
Then, I did a first run using firebuild. Firebuild is a caching build optimizer, so the first run will naturally be somewhat slower (but if you often rebuild your kernel, it should be seen as an investment). Now, on the Raspberry Pi, which uses a slow SD card interface for its storage… it is a heavy investment. The first time I built with firebuild, it meant almost a 100% hit in build time:
# cd /usr/src/linux-source-6.1
# make clean
# make defconfig
# time firebuild make -j 4
(...)
real    212m58.647s
user    391m49.080s
sys     81m10.758s
Not only that; I am using a fairly decent and big 32GB card, but this is quite a big price to pay in such a limited system!
# du -sh .cache/firebuild/
4.2G    .cache/firebuild/
I then did a second clean build, this time with the firebuild cache already populated, and it does help, although not by as much as on higher-performance systems:
# cd /usr/src/linux-source-6.1
# make clean
# make defconfig
# time firebuild make -j 4
(...)
real    68m6.621s
user    98m32.514s
sys     31m41.643s
So, it built in roughly 65% of the time it would take to build regularly. And what about rebuilding without cleaning?
# make defconfig
# time firebuild make -j 4
(...)
real    1m11.872s
user    2m5.807s
sys     1m46.178s
In this case, using firebuild was roughly 30% slower than not using it. I guess the high number of file operations inside .cache/firebuild is to blame, as on the media I'm using those are quite expensive; make went its way basically checking date stamps between *.c and *.o files (yes, very roughly), and while running under firebuild, I suppose each of these meant an extra lookup inside the cache. So… Experiment requested, experiment performed!

21 March 2023

Gunnar Wolf: Impact of parallelism and processor architecture while building a kernel

Given that Bálint just bragged (er, blogged) about how efficiently he can build a Linux kernel (less than 8 seconds, wow! Well, yes, until you read that it is the result of aggressive caching and is achieved only on a second run), and that a question popped up today on the Debian ARM mailing list ("Is an ARM computer a good choice? Which one?"), I decided to share the results of an experiment I did several months ago, to graphically show my students the effects of parallelism, the artifacts of hyperthreading, the effects of different architecture sets, and even to illustrate the actual futility of my experiment (somewhat referring to John Gustafson's reevaluation of Amdahl's law, already 30 years ago: "One does not take a fixed-size problem and run it on various numbers of processors except when doing academic research"; thanks for referring to my inconsequential, repetitive compilations as academic research!). I don't expect any of the following images to be groundbreaking, but at least, next time I need them, it is quite likely I'll be able to find them, and I will be able to more easily refer to them in online discussions. So… What did I do? I compiled Linux repeatedly, on several of the machines I had available, varying the -j flag (how many cores to use simultaneously), starting with a single core and pushing up until just a bit over the physical number of cores the CPU has. Sadly, I lost several of my output images, but the three following are enough to tell interesting bits of the story. Of course, I have to add that this is not a scientific comparison; the server and my laptop have much better I/O than the Raspberry's puny micro-SD card (and compiling hundreds of thousands of files is quite an I/O-stressed job, even though the full task does exhibit the very low single-threaded performance of the Raspberry, even compared with the Yoga). No optimizations were done (they would be harmful to the effects I wanted to show!); the compile was made straight from the upstream sources.
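For reference, the limit Amdahl's law imposes is easy to sketch numerically; the following toy calculation uses a made-up parallel fraction (0.95), not anything measured from these builds:
# Amdahl's law: with a fraction p of the work parallelizable, the best
# possible speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Toy numbers only; p = 0.95 is not derived from the kernel builds above.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(0.95, n):.2f}x")
# Even with 95% of the work parallel, 4 cores top out near 3.5x, and the
# curve flattens quickly, which is what the -j sweeps illustrate.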

16 February 2023

Gunnar Wolf: We are GREAT at handling multimedia!

I have mentioned several times in this blog, as well as via other communication means, that I am very happy with the laptop I bought (used) about a year and a half ago: an ARM-based Lenovo Yoga C630. Yes, I knew from the very beginning that using this laptop would pose a challenge to me in many ways, as full hardware support for ARM laptops is nowhere near as easy as for plain, boring x86 systems. But the advantages far outweigh the inconvenience (i.e. the hoops I had to jump through to handle video-out when I started teaching in person, which are fortunately a thing of the past now). Anyway… This post is not about my laptop. Back in 2018, I was honored to be appointed as a member of the Debian Technical Committee. Of course, that meant (due to the very clear and clever point 6.2.7.1 of the Debian Constitution) that my tenure in the Committee (as well as Niko Tyni's) finished on January 1, 2023. We were invited to take part in a Jitsi call as a last meeting, as well as to welcome Matthew Garrett to the Committee. Of course, I arranged to be calling from my desktop system at work (for which I have an old, terrible webcam, but as long as I don't need to control screen sharing too finely, it mostly works). Out of eight people in the call, two had complete or quite crippling failures with their multimedia setup, and one had a frozen image (at least as far as I could tell). So… Yes, Debian is indeed good and easy and simple and reliable for most nontechnical users using standard tools. But I guess that we power users enjoy tweaking our setup to our precise, particular liking. Or that we just don't care about frivolities such as having a working multimedia setup. Or… I don't know what happens. But the fact that close to half of the Technical Committee, which should consist of Debian Developers who know their way around technical obstacles, cannot get a working multimedia setup for a simple, easy WebRTC call (even after a pandemic that made us all work via teleconferencing solutions on a daily basis!) is just… beautiful.

29 January 2023

Gunnar Wolf: miniDebConf Tamil Nadu 2023

Greetings from Viluppuram, Tamil Nadu, South India! As a preparation and warm-up for DebConf in September, the Debian people in India have organized a miniDebConf. Well, I don't want to be unfair to them: they have been regularly organizing miniDebConfs for over a decade, and while most of the attendees are students local to this state in South India (the very tip of the country; Tamil Nadu is the eastern side, and Kerala, where Kochi is and where DebConf will be held, is the western side), I have talked with attendees from very different regions of this country. This miniDebConf is somewhat similar to similarly-scoped events I have attended in Latin America: it is mostly an outreach conference, but it's also a great opportunity for DDs in India to meet in the famous hallway track. India is incredibly multicultural. Today at the hotel, I was somewhat surprised to see people from Kerala trying to read a text written in Tamil: not only are the languages different, but the writing systems are as well. From what I read, the Tamil script is a bit simpler than Kerala's Malayalam, although they come from similar roots. Of course, my school of thought is that, whenever you visit a city, culture or country that differs from the place you were born, a fundamental component to explore and to remember is: food! And one of the things I most looked forward to on this trip was precisely that. I arrived at Chennai Airport (MAA) at 8:15 local time yesterday morning, so I am far from an expert, but I have been given (and most happily received) biryani three times (pictured in the photo by this paragraph). It is delicious, although I cannot yet describe the borders of what should or should not be considered proper biryani: the base dish is rice, and you go on mixing it with different sauces or foods. What managed to surprise us foreigners is, strangely, well known to us all: there is no spoon. No, the food is not pushed into your mouth using metal or wooden utensils. Not even using a tortilla, as back home, or by breaking off bits of the injera that also serves as the dish, as in Ethiopia. Sure, there is naan, but it is completely optional, and it would be a bit too much for as big a dish as what we got. Biryani is eaten with the tools natural to us primates: the fingers. We have learnt some different techniques, but so far I am still using the basic technique (thumb-finger-middle). I'm closing the report with the photo of the closing of the conference, as it happens. And I will, of course, share our adventures as they unfold in the next couple of days. Because… Well, we finished the conference-y part of the trip, but we have a full week of (pre-)DebConf work ahead of us!

16 January 2023

Gunnar Wolf: Back to Understanding Computers and Cognition

As many of you know, I work at UNAM, Mexico's largest university. My work is split into two parts: my full-time job is to be the systems and network administrator at the Economics Research Institute, and I do some hours of teaching at the Engineering Faculty. At the Institute, my role is academic, but although I have tried to frame my work in a way amenable to analysis grounded in the social sciences (Construcción Colaborativa del Conocimiento, Hecho con Creative Commons, Mecanismos de privacidad y anonimato), so far I have not taken part in academic collaboration with my coworkers; Economics is a field very far from my interests, to somehow illustrate it. So I was very happy when I was invited to be part of a seminar on "The Digital Economy in the Age of Artificial Intelligence". I talked with the coordinator, and we agreed that we have many experts in the economic sciences, but understanding what Artificial Intelligence means eludes them, so I will be writing one of the introductory chapters to this analysis. But… Hey, I'm no expert in Artificial Intelligence. If anything, I could be categorized as an AI skeptic! Well, at least I might be the closest thing at hand in the Institute. So I have been thinking about what I will be writing, and finding and reading material to substantiate it. One of the readings I determined early on I would be going back to is Terry Winograd and Fernando Flores' 1986 book, Understanding Computers and Cognition: A New Foundation for Design (oh, were you expecting a link to buy it instead of reading it online?). I first came across this book by mere chance. Back on the last day of the year 2000, my friend Ariel invited me and my then-girlfriend to tag along and travel by land to the USA to catch some just-after-Christmas deals in San Antonio. I was not into shopping, but I have always enjoyed road trips, so we went together. We went, yes, to the never-ending clothing shops, but we also went to some big bookstores… and the money I didn't spend at other shops, I spent there. And then some more. There was a little, somewhat oldish book that caught my eye. And I'll be honest: I looked at this book only because it was obviously produced by the LaTeX typesetting system (the basics of which I learnt in 1983, and with which I have written, well, basically everything substantial I've ever done). I remember I read this book with the greatest interest back in that year, and finished it with a "Wow, that was a strange trip!" And… Although I have never done much that could be considered AI-related, this has always been my main reference. But not for explaining what a perceptron is, how an expert system is to ponder the weight of given information, whether a neural network is convolutional or recurrent, or how to turn a network trained to recognize feature x into a generative network. No, our book is not technical. Well… Not in that sense. This book tackles cognition. But in order to discuss cognition, it must first come to a proper definition of it. And to do so, it has to base itself on philosophy, starting by noting the authors' disagreement with what they term the rationalistic tradition: what we have come to call valid reasoning in Western countries. Their main claim is that the rationalistic tradition cannot properly explain a process as complex as cognition (how much bolder can you be than to propose something like this?). So, this book presents many constructs of Heideggerian origin, aiming to explain what understanding and being are.
In doing so, it follows Humberto Maturana's work. Maturana is also a philosopher, but he comes from a background in biology; he published works on animal neurophysiology that are also presented here. Writing this, I must assure you I am not a philosopher, and I lack the field-specific knowledge to know whether this book is really so unique. I know from the outset that it does not directly help me write the chapter I will be writing (but it will surely help me write some important caveats that will make the chapter much more interesting and different from what anybody with a Web browser could write about artificial intelligence). One last note: although very well written, and notable for bringing hard-to-grasp concepts to mere technical staff such as myself, this is not light, easy reading. I started re-reading this book a couple of weeks ago, and have just finished chapter 5 (page 69). As some reviewers state, this is one of those books where you have to go back over a paragraph or two, over and over. But it is a most enjoyable and interesting read.

9 January 2023

Gunnar Wolf: Back to Xochicalco

In Mexico, we have the great luck to live among vestiges of long-gone cultures, some that were conquered and in some way got adapted and survived into our modern, mostly West-European-derived society, and some that thrived but disappeared many more centuries ago. And although not everybody feels the same way, in my family we have always enjoyed visiting archaeological sites, both when I was a child and today. Some of the regulars who follow this blog (or its syndicators) will remember Xochicalco, as it was the destination we chose for the day trip back in the day, at DebConf6 (May 2006). This weekend, my mother suggested we go there: being winter, the weather is quite pleasant (we were at about 25°C, and in the hottest months of the year it can easily reach 10 more; the place lacks shade, like most archaeological sites, and it does get quite tiring nevertheless!). Xochicalco is quite unique among our archaeological sites, as it was built as a conference city: people came from cultures spanning all of Mesoamerica to debate and homogenize the calendars used in the region. The first photo I shared here is by the Quetzalcóatl temple, where each of the four sides shows people from different cultures (the styles in which they are depicted follow their local self-representations), encodes equivalent dates in the different calendric systems, and places them alongside representations of the god of knowledge, the feathered serpent, Quetzalcóatl. It was a very nice day out. And, of course, it brought back memories of my favorite conference, visiting the site of a very important conference…

8 January 2023

Antoine Beaupré: 20 years blogging

Many folks have woken up to the dangers of commercialization and centralisation of this very fine internet we have around here. For many of us, of course, it's one big "I told you so"... (To be fair, I stopped "telling you so" because evangelism is pretty annoying. It's certainly dishonest coming from an atheist, so I preach by example now. I often wonder what works better. But I digress.) Colleagues have been posting about getting back into blogging. This post from gwolf, in particular, reviews his yearly blog output, and that made me wonder how that looked from my end. The answer, of course, is simple to generate:
anarcat@angela:~$ cd anarc.at
/home/anarcat/wikis/anarc.at
anarcat@angela:anarc.at$ ls blog | grep '^[0-9][0-9][0-9][0-9]' | sed s/-.*// | sort | uniq -c | sort -n -k2
     62 2005
     49 2006
     26 2007
     25 2008
      8 2009
     16 2010
     24 2011
     19 2012
     17 2013
      7 2014
     19 2015
     32 2016
     43 2017
     40 2018
     27 2019
     33 2020
     22 2021
     45 2022
      1 2023
(I thought of drawing this as a sparkline, but it looks like sparklines are kind of dead. https://sparkline.org/ doesn't resolve and the canonical PHP package is gone from Debian. The plugin is broken in ikiwiki anyway...) So it seems like I've been doing this very blog for 18 years, and it's not even my first blog. I actually started in 2003, which makes this year my 20-year blogging anniversary. (And even if that sounds really old, note that I was not actually an early adopter, Jorn Barger having coined the term "weblog" in 1997. Yes, in another millennium.) Reading back some of the headlines in those older posts, I have definitely changed style. I used to write shorter, more random ideas, and more often. I somehow managed to write more than one article per week in 2005! Now, I take more time writing, a habit I picked up while writing for LWN (those articles), which started in 2016. But interestingly, it seems I started producing more articles then: I hit 43 articles per year in 2017, my fourth best year ever. The best years in terms of numbers are the first two (2005 and 2006, I didn't check the numbers on earlier years), but I doubt they are the best in terms of content. Really, the best writing I have ever done was for LWN. I dare hope I have kept the quality I was encouraged (forced?) to produce, but I know I cannot come anywhere close to what the LWN editors were juicing out of me. You can also see that I immediately dropped to a more normal 27 articles in 2019, once I stopped writing for LWN... Back when I started blogging, my writing was definitely more personal. I had fewer concerns about privacy back then; now, I would never write about my personal life like I did back then (e.g. "I have a cold"). I was also writing mostly in French back then, and it's sad to think that I rarely write in my native language here anymore. I guess that brings me an international audience, which is simultaneously challenging, gratifying, and terrifying. But it also means I reach people who do not speak English (or French, for that matter) as their first language. For me that is more valuable than catering to my little corner of culture, at least for now, and especially when writing about technical topics, which is most of what I do now anyway. Interestingly, I wrote a lot in 2022. People sometimes ask me how I manage to write so much: I don't actually know. I don't have a lot of free time on my hands, even less in the past two years than before, but somehow I keep feeding this blog. I guess I must have something to say. Can't quite figure out what yet, so maybe I should just keep trying. Or, if you're new to this Internet thing, Bring Back Blogging! Whee! PS: I wish I had time to do a review of my visitors like I did for 2021, but time is missing.
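(If someone did want that sparkline without the dead PHP package, plain Unicode block characters are enough; here is a rough sketch built from the counts above, nothing more than that:)
# Render the per-year article counts listed above as a text sparkline.
counts = [62, 49, 26, 25, 8, 16, 24, 19, 17, 7,
          19, 32, 43, 40, 27, 33, 22, 45, 1]
blocks = "▁▂▃▄▅▆▇█"
top = max(counts)
spark = "".join(blocks[min(len(blocks) - 1, c * len(blocks) // top)] for c in counts)
print(spark)  # one character per year, 2005 through 2023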

2 January 2023

Gunnar Wolf: Refueling the blog

So, it's this weird time of year when we take stock and share with the world some ideas about the future. And yes, it's time to take care of this blog, as its activity has dropped once again. So maybe it'd be nice to start this post by checking how much I have blogged over the years: (yes, this is an old blog) So yes, there is a clear downward trend over the last few years. And it does make sense, all in all: not only have I managed to keep myself busier than before, but… Blogging is a social endeavor. And as people have moved over to the different flavors of social networks, there is somewhat less fuel for us to share our thoughts and experiences in this fashion. So this connects me to my first point: staring at Noodles' Emptiness, I got to a campaign to Bring Back Blogging. I stand by all of what they suggest: blogs are a great invention, they allow the sharing of a great insight into a person's mind, ideas and worldview (and even more so if, like mine, the blog already shows a window of almost two decades of life! This year my blog will be old enough to vote!), they are completely decentralized, and they can be easily grouped according to each reader's preferences via the RSS format. Anyway… I do want to write a post summing up 2022, as well as sharing some hopes and projects I have for 2023. But I don't want to make it too long to read. So… That shall be the blog post for today! (post caption image by Walt Stoneburner on Flickr; CC BY 2.0)

14 October 2022

Gunnar Wolf: Learning some Rust with Lars!

A couple of weeks ago, I read a blog post by former Debian Developer Lars Wirzenius offering a free basic (6hr) course on the Rust language to interested free software and open source software programmers. I know Lars offers training courses in programming, and besides knowing him for ~20 years and being proud to consider us friends, I have worked with him on a couple of projects (e.g. he is upstream for vmdb2, which I maintain in Debian and use for generating the Raspberry Pi Debian images). He is a talented programmer, and a fun guy to be around. I was admitted to the first cohort of students of this course (please note I'm not committing him to run free courses ever again! He has said he would consider doing so, especially at a different time better suited to people in Asia). I have wanted to learn some Rust for quite some time. About a year ago, I bought a copy of The Rust Programming Language, the canonical book for learning the language, and started reading it… but I lacked motivation and lost steam halfway through, without having done even a simple real project beyond the book's exercises. How has this been? I have enjoyed the course. I must admit I did expect it to be more hands-on from the beginning, but Rust is such a large language, and it introduces so many new, surprising concepts. Session two did have two somewhat simple hands-on challenges; saying they were somewhat simple does not mean we didn't have to sweat to get them to compile and work correctly! I know we will finish this Saturday, and I'll still be a complete newbie to Rust. I know the only real way to wrap my head around a language is to actually have a project that uses it, and I have some ideas in mind. However, I don't really feel confident enough to approach an already existing project and start meddling with it, trying to contribute. What does Rust have that makes it so different? Bufff… Variable ownership (borrow checking) and value lifetimes are the most obvious salient ideas, but they are relatively simple, as you just cannot forget about them. But understanding (and adopting) idiomatic constructs such as the pervasive use of enums, or understanding that errors always have to be catered for by using expect() and Result<T,E>… It will take some time to be at ease developing in it, if I ever reach that stage! Oh, FWIW… Interesting related reading: I am halfway through an interesting article, published in March in the Communications of the ACM, titled "Here We Go Again: Why Is It Difficult for Developers to Learn Another Programming Language?", that presents an interesting point we don't always consider: if I'm a proficient programmer in the X programming language and want to use the Y programming language, learning it should be easier for me than for the casual bystander, or not? After all, I already have a background in programming! But it happens that the mental constructs we build for a given language might hamper our learning of a very different one. This article presents three interesting research questions:
  1. Does cross-language interference occur?
  2. How do experienced programmers learn new languages?
  3. What do experienced programmers find confusing in new languages?
I'm far from reaching the conclusions, but so far, it's been a most interesting read. Anyway, to wrap up… Thanks, Lars! I am learning quite a bit about a very interesting topic (although at a pace that is not magically quick, but I am aware of the language's steep learning curve), and I'm also enjoying the time I spend in front of my computer on Saturdays.

27 September 2022

Steve McIntyre: Firmware again - updates, how I'm voting and why!

Updates: Back in April I wrote about issues with how we handle firmware in Debian, and I also spoke about it at DebConf in July. Since then, we've started the General Resolution process - this led to a lot of discussion on the debian-vote mailing list and we're now into the second week of the voting phase. The discussion has caught the interest of a few news sites along the way. My vote: I've also had several people ask me how I'm voting myself, as I started this GR in the first place. I'm happy to oblige! Here's my vote, sorted into preference order:
  [1] Choice 5: Change SC for non-free firmware in installer, one installer
  [2] Choice 1: Only one installer, including non-free firmware
  [3] Choice 6: Change SC for non-free firmware in installer, keep both installers
  [4] Choice 2: Recommend installer containing non-free firmware
  [5] Choice 3: Allow presenting non-free installers alongside the free one
  [6] Choice 7: None Of The Above
  [7] Choice 4: Installer with non-free software is not part of Debian
Why have I voted this way? Fundamentally, my motivation for starting this vote was to ask the project for clear positive direction on a sensible way forward with non-free firmware support. Thus, I've voted all of the options that do that above NOTA. On those terms, I don't like Choice 4 here - IMHO it leaves us in the same unclear situation as before. I'd be happy for us to update the Social Contract for clarity, and I know some people would be much more comfortable if we do that explicitly here. Choice 1 was my initial personal preference as we started the GR, but since then I've been convinced that also updating the SC would be a good idea, hence Choice 5. I'd also rather have a single image / set of images produced, for the two reasons I've outlined before. It's less work for our images team to build and test all the options. But, much more importantly: I believe it's less likely to confuse new users. I appreciate that not everybody agrees with me here, and this is part of the reason why we're voting! Other Debian people have also blogged about their voting choices (Gunnar Wolf and Ian Jackson so far), and I thank them for sharing their reasoning too. For the avoidance of doubt: my goal for this vote was simply to get a clear direction on how to proceed here. Although I proposed Choice 1 (Only one installer, including non-free firmware), I also seconded several of the other ballot options. Of course I will accept the will of the project when the result is announced - I'm not going to do anything silly like throw a tantrum or quit the project over this! Finally If you're a DD and you haven't voted already, please do so - this is an important choice for the Debian project.

23 September 2022

Gunnar Wolf: 6237415

Years ago, it was customary that some of us publicly stated the way we were thinking at the time of Debian General Resolutions (GRs). And even if we didn't, vote lists were open (except when voting for people, i.e. when electing a DPL), so if interested we could understand what our different peers thought. This is the first vote, though, where a Debian GR is protected by voting secrecy. I think it is sad we chose that path, as I liken a GR vote more to a vote within the general assembly of a cooperative than to a countrywide election; I feel that understanding who is behind each posture helps us better understand the project as a whole. But anyway, I'm digressing… Even though I remained quiet during much of the discussion period (I was preparing for and attending a conference), I am very much interested in this vote: I am the maintainer for the Raspberry Pi firmware, and I am a seconder for two of the ballot options. Many people know me for being quite inflexible in my interpretation of what should be considered Free Software, and I'm proud of it. But still, I believe it to be fundamental for Debian to be able to run on the hardware most users have. So… My vote was as follows:
[6] Choice 1: Only one installer, including non-free firmware
[2] Choice 2: Recommend installer containing non-free firmware
[3] Choice 3: Allow presenting non-free installers alongside the free one
[7] Choice 4: Installer with non-free software is not part of Debian
[4] Choice 5: Change SC for non-free firmware in installer, one installer
[1] Choice 6: Change SC for non-free firmware in installer, keep both installers
[5] Choice 7: None Of The Above
For people reading this who are not into Debian's voting processes: Debian uses the cloneproof Schwartz sequential dropping Condorcet method, which means we don't only choose our favorite option (which could lead to suboptimal strategic voting outcomes), but we rank all the options according to our preferences. To read this vote, we should first locate the position of "None of the above", which in my ballot is #5. Let me reorder the ballot according to my preferences:
[1] Choice 6: Change SC for non-free firmware in installer, keep both installers
[2] Choice 2: Recommend installer containing non-free firmware
[3] Choice 3: Allow presenting non-free installers alongside the free one
[4] Choice 5: Change SC for non-free firmware in installer, one installer
[5] Choice 7: None Of The Above
[6] Choice 1: Only one installer, including non-free firmware
[7] Choice 4: Installer with non-free software is not part of Debian
That is, I don't agree with Steve McIntyre's original proposal, Choice 1 (even though I seconded it; this means I think it's very important to have this vote, and, as a first proposal, it's better than the status quo. Maybe it's contradictory that I prefer it to the status quo but ranked it below NotA; well, more on that when I present Choice 5). My least favorite option is Choice 4, presented by Simon Josefsson, which represents the status quo: I don't want Debian to be left with only an installer that cannot run on most modern hardware with a reasonably good user experience (i.e. with network support, or the ability to boot at all!). Slightly above my acceptability threshold, I ranked Choice 5, presented by Russ Allbery. Debian's voting and its constitution rub against each other in interesting ways, so the Project Secretary has to run the votes as they are presented, but he has interpreted Choice 1 to be incompatible with the Social Contract (as there would no longer be a DFSG-free installer available), and if it wins, it could lead him to having to declare the vote invalid. I don't want that to happen, and that's why I ranked Choice 1 below None of the above.
[update/note] Several people have asked me to back up my claim that the Secretary said so. I can refer to four mails: 2022.08.29, 2022.08.30, 2022.09.02, 2022.09.04.
Other than that, Choice 6 (proposed by Holger Levsen), Choice 2 (proposed by me) and Choice 3 (proposed by Bart Martens) are very similar; the main difference is that Choice 6 includes a modification to the Social Contract expressing that:
The Debian official media may include firmware that is otherwise not
part of the Debian system to enable use of Debian with hardware that
requires such firmware.
I believe Choices 2 and 3 to be mostly the same, with Choice 2 being more verbose in explaining the reasoning than Choice 3. Oh! And there are always some more bits to the discussion… For example, given that they contain modifications to the Social Contract, both Choice 5 and Choice 6 need a 3:1 supermajority to pass. So, let's wait until the beginning of October to get the results, and to implement the changes they will (or will not?) allow. If you are a Debian Project Member, please vote!

30 May 2022

Gunnar Wolf: On to the next journey

Last Wednesday my father, Kurt Bernardo Wolf Bogner, took the steps towards his next journey, the last that would start in this life. I cannot put words to this, so just sharing it with the world will have to suffice. Goodbye to my teacher, my friend, the person I have always looked up to. Some of his friends were able to put into words more than what I can come up with. If you can read Spanish, you can read the eulogy from the Science Academy of Morelos. His last project, enjoyable by anybody who reads Spanish, is the book with his account of his youth travels through Asia, Africa and Europe, between 1966 and 1970. You can read it online. And I have many printed copies, in case you want one as well. We will always remember you with love.

18 May 2022

Gunnar Wolf: I do have a full face

I have been a bearded subject since I was 18, back in 1994. Yes, during 1999-2000 I shaved for my military service, and I briefly tried the goatee look in 2008… Few people nowadays can imagine my face without a forest of hair. But sometimes, life happens. And, unlike my good friend Bdale, I didn't get Linus to do the honors… But, all in all, here I am: It turns out I have been suffering from quite bad skin infections for a couple of years already. Last Friday, I checked into the hospital with an ugly, swollen face (I won't put you through that), and the hospital staff decided it was in my best interest to trim my beard. And then some more. And then shave me. I sat in the hospital for four days, getting soaked (medical term) with antibiotics and other stuff, got my prescriptions for the next few days, and, well, I really hope that's the end of the infections. We shall see! So, this is the result of the loving and caring work of three different nurses. Yes, not clean-shaven (I should not trim it further, as shaving blades are a risk of reinfection). Anyway… I guess the bits of hair you see all over the place will not take too long to become a beard again, and even get somewhat respectable. But I thought some of you would like to see the real me… PS: Thanks to all who have reached out with good wishes. All is fine!

3 May 2022

Gunnar Wolf: Using a RPi as a display adapter

Almost ten months ago, I mentioned on this blog that I bought an ARM laptop, which is now my main machine while away from home: a Lenovo Yoga C630 13Q50. Yes, yes, I am still not away from home as much as I used to be, as this pandemic is still somewhat of a thing, but I do move around more. My main activity in the outside world with my laptop is teaching. I teach twice a week, and, well, having a display for my slides and for showing examples in the terminal and such is a must. However, as I said back in August, one of the hardware support issues for this machine is:
No HDMI support via the USB-C displayport. While I don t expect
to go to conferences or even classes in the next several months,
I hope this can be fixed before I do. It s a potential important
issue for me.
It has sadly not yet been solved. While many things have improved since kernel 5.12 (the first I used), the Device Tree does not yet hint at where external video might sit. So, I went for the obvious: many people carry different kinds of video adapters… I carry a slightly bulky one: an RPi3. For two months already (time flies!), I had an ugly contraption where the RPi3 connected via Ethernet and displayed a VNC client, and my laptop ran a VNC server. Oh, but did I mention… My laptop works so much better with Wayland than with Xorg that I switched, and am now a happy user of the Sway compositor (a drop-in replacement for the i3 window manager). It is built over wlroots, which is a great and (relatively) simple project, but will thankfully not carry some of Gnome's or KDE's ideas, not even those I'd rather have. So it took a bit of searching; I was very happy to find WayVNC, a VNC server for wlroots-based Wayland compositors. I launched a second Wayland session, to be able to keep my main session undisturbed and present only a window from it. Only that VNC is slow and laggy, and sometimes awkward. So I kept searching for something better. And something better is, happily, what I was finally able to do! On the laptop, I am using wf-recorder to grab an area of the screen and funnel it into a V4L2 loopback device (which allows it to be used as a camera, solving the main issue with grabbing parts of a Wayland screen):
/usr/bin/wf-recorder -g '0,32 960x540' -t --muxer=v4l2 --codec=rawvideo --pixelformat=yuv420p --file=/dev/video10
(yes, my v4l2loopback device is set to /dev/video10). You will note I'm grabbing a 960×540 rectangle, which is the top of my screen (1920x1080) minus the Waybar. I think I'll increase it to 960×720, as the projector to which I connect the Raspberry has a 4:3 output. After this is sent to /dev/video10, I tell ffmpeg to send it via RTP to the fixed address of the Raspberry:
/usr/bin/ffmpeg -i /dev/video10 -an -f rtp -sdp_file /tmp/video.sdp rtp://10.0.0.100:7000/
Yes, some uglier things happen here. You will note /tmp/video.sdp is created on the laptop itself; this file describes the stream's metadata so it can be used from the client side. I cheated and copied it over to the Raspberry, doing an ugly hardcode along the way:
user@raspi:~ $ cat video.sdp
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 10.0.0.100
t=0 0
a=tool:libavformat 58.76.100
m=video 7000 RTP/AVP 96
b=AS:200
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=1
People familiar with RTP will scold me: how come I'm streaming to the unicast client address? I should do it to an address in the 224.0.0.0-239.0.0.0 range. And it worked, sometimes. I switched over to 10.0.0.100 because it works basically always. Finally, upon bootup, I have configured NoDM to start a session with the user user, and dropped the following in that user's .xsession:
setterm -blank 0 -powersave off -powerdown 0
xset s off
xset -dpms
xset s noblank
mplayer -msglevel all=1 -fs /home/usuario/video.sdp
Anyway, as a result, my students are able to follow the pace of my presentation much better, and I'm able to pull off some tricks better (particularly those requiring quick reaction times, as often happens when dealing with concurrency and similar issues). Oh, and of course, in case it's of interest to anybody: knowing that SD cards are all but reliable in the long run, I wrote a vmdb2 recipe to build the images. You can grab it here; it requires some local files to be present to be built (some are the ones I copied over above, and the others are surely of no interest to you, such as my public ssh key :-] ). What am I still missing? (Read: can you help me with some ideas?) Of course, this is a blog post published to brag about my stuff, but also to serve me as persistent memory in case I need to recreate this…
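And, in that spirit of persistent memory, here is a small sketch of how the laptop-side pieces above could be tied together in a single script. It simply wraps the exact wf-recorder and ffmpeg invocations shown earlier (same geometry, loopback device and Raspberry address; adjust them for any other setup):
# Sketch only: launch the screen grabber and the RTP streamer together,
# using the same commands shown above, and clean up when interrupted.
import subprocess

grabber = subprocess.Popen([
    "wf-recorder", "-g", "0,32 960x540", "-t",
    "--muxer=v4l2", "--codec=rawvideo", "--pixelformat=yuv420p",
    "--file=/dev/video10",
])
streamer = subprocess.Popen([
    "ffmpeg", "-i", "/dev/video10", "-an",
    "-f", "rtp", "-sdp_file", "/tmp/video.sdp",
    "rtp://10.0.0.100:7000/",
])

try:
    streamer.wait()          # runs until ffmpeg exits or Ctrl-C
except KeyboardInterrupt:
    pass
finally:
    for proc in (streamer, grabber):
        proc.terminate()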

7 April 2022

Gunnar Wolf: How is the free firmware for the Raspberry progressing?

Raspberry Pi computers require a piece of non-free software to boot: the infamous raspi-firmware package. But for almost as long as there has been a Raspberry Pi to talk of (this year it turns 10 years old!), there have been efforts to get it to boot using only free software. How is that progressing? Michael Bishop (IRC user clever) explained today in the #debian-raspberrypi channel on OFTC that it is advancing far better than I expected: it is even possible to boot a usable system on the RPi2 family! Just… hardware support is still somewhat incomplete: for his testing, he has managed to use an xfce environment, but over the composite (NTSC) video output, as HDMI initialization support is not there yet. However, he shared with me several interesting links and videos, and I told him I'd share them. There are still many issues; I do not believe it is currently worth it to make Debian images with this firmware. Before anything else: go visit the librerpi/lk-overlay repository. Its README outlines hardware support for each of the RPi families; there is a binary build available with NixOS if you want to try it out, and instructions to build it. But what clever showed me that made me write this post is the amount of stuff you can do with the RPi's VPU (why Vision Vector Processing Unit and not the more familiar GPU, Graphics Processing Unit? I don't really know… but I trust clever's definitions more than my own) before it loads an operating system. There's not too much I can add to this. I was just… truly amazed. And I hope to see the remaining hurdles for regular Linux booting on this range of machines with purely free software quickly go away! Packaging this for Debian? Well, not yet, not so fast… I first told clever we could push this firmware to experimental instead of unstable, as it is not yet ready for most production systems. However, pabs asked some spot-on further questions. And yes, it requires installing three(!) different cross-compilers, one of which (vc4-toolchain, for the VPU) is free software, but not yet upstreamed, and hence not available in Debian. Anyway, the talk continued long after I had to go. I have gone a bit over the backlog, but I have to leave now, so that will be it for this blog post…

21 March 2022

Gunnar Wolf: Long, long, long live Emacs after 39 years

Reading Planet Debian (see, Sam, we are still having a conversation over there?), I read Anarcat's 20+ years of Emacs. And… Well, should I brag (er, contribute) to the discussion? Of course, why not? Emacs is the first computer program I can name that I ever learnt to use to do something minimally useful. 39 years ago.
From the Space Cadet keyboard that (obviously) influenced Emacs' early design
The Emacs editor was born, according to Wikipedia, in 1976, the same year as myself. I am clearly not among its first users. It was already a well-established citizen when I first learnt it; I am fortunate to be the son of a Physics researcher at UNAM. My father used to take me to his institute after he noticed how attracted I was to computers; we would usually spend some hours there, between 7 and 11PM, on Friday nights. His institute had a computer room where they had very sweet gear: some 10 Heathkit terminals, quite similar to this one: The terminals were connected (via individual switches) to both a PDP-11 and a Foonly F2 computer. The room also had a beautiful thermal printer, a beautiful Tektronix vector graphics output terminal, and some other stuff. The main use my father made of it was to typeset some books; he had recently (1979) published Integral Transforms in Science and Engineering (that must be my first mention in scientific literature), and I remember he was working on the proceedings of a conference he held in Oaxtepec (the account he used in the system was oax, not his usual kbw, which he lent me). He was also working on Manual de Lenguaje y Tipografía Científica en Castellano, where you can see some examples of TeX; due to a hardware crash, the book has the rare privilege of being a direct copy of the output of the thermal printer: it was not possible to produce a higher-resolution copy for several years… But it is fun and interesting to see what we were able to produce with in-house tools back in 1985! So, what could he teach me so I could use the computers while he worked? TeX, of course. No, not LaTeX (that was published in 1984). LaTeX is a set of macros developed initially by Leslie Lamport, used to make TeX easier; TeX was developed by Donald Knuth, and if I have this information correct, it was Knuth himself who installed and demonstrated TeX on the Foonly computer during a visit to UNAM. Now, after 39 years hammering at Emacs buffers… Have I grown extra fingers? Nope. I cannot even write decent elisp code, and can barely read it. I do use org-mode (a lot!) and love it; I have written basically five books, many articles and lots of presentations and minor documents with it. But I don't read my mail or handle my git from Emacs. I could say I'm a relative newbie after almost four decades. Four decades… When we got a PC in 1986, my father got the people at the Institute to get him memacs (micro-emacs). There was probably a ten-year period when I barely used any Emacs, but I always recognized it. My fingers have memorized a dozen or so movement commands, and a similar number of file management commands. And yes, Emacs and TeX are still the main tools I use day to day.

17 March 2022

Gunnar Wolf: Speaking about the OpenPGP WoT on LibrePlanet this Saturday

So, LibrePlanet, the FSF's conference, is coming! I much enjoyed attending this conference in person in March 2018. This year I submitted a talk again, and it got accepted. Of course, given the conference is still 100% online, I doubt I will be able to go into 100% conference mode (I hope to catch a couple of other talks, but, well, we are all eager to go back to how things were before 2020!)

Anyway, what is my talk about? My talk is titled "Current challenges for the OpenPGP keyserver network. Is there a way forward?". The abstract I submitted follows:
Many free projects use OpenPGP encryption or signatures for various important tasks, like defining membership, authenticating participation, asserting identity over a vote, etc. The Web-of-Trust upon which its operation is based is a model many of us hold dear, allowing for a decentralized way to assign trust to the identity of a given person. But both the Web-of-Trust model and the software that serves as a basis for the above mentioned uses are at risk due to attacks on the key distribution protocol (not on the software itself!) With this talk, I will try to bring awareness to this situation, to some possible mitigations, and present some proposals to allow for the decentralized model to continue to thrive towards the future.
I am in the third semester of my PhD, trying to somehow keep a decentralized infrastructure for the OpenPGP Web of Trust viable and usable for the future. While this is still in the early stages of my PhD work (and I still don't have a solution to present), I will talk about what the main problems are and will sketch out the path I intend to develop. What is the relevance? Mainly, I think, that many free software projects use the OpenPGP Web of Trust for their identity definitions. Are we anachronistic? Are we using tools unfit for this century? I don't think so. I think we are in time to fix the main sore spots of this great example of a decentralized infrastructure.

When is my talk scheduled? This Saturday, 2022.03.19, at:
  GMT / UTC time: 19:25 to 20:10
  Conference schedule time (EDT/GMT-4): 15:25 to 16:10
  Mexico City time (GMT-6): 13:25 to 14:10

How to watch it? The streams are open online. I will be talking in the Saturn room; feel free to just show up there and watch! The FSF asks people to register for the conference (https://my.fsf.org/civicrm/event/info?reset=1&id=99) beforehand, in order to be able to participate actively (i.e. ask questions and the like). Of course, you might be interested in other talks; take a look at the schedule! LibrePlanet keeps a video archive of their past conferences, and this talk will be linked from there. Of course, I will link to the recording once it is online. Update: As of 2022.03.30, LibrePlanet has posted the videos for all of their talks, all linked from the program. And of course, for convenience, I copied the talk over here: Current challenges for the OpenPGP keyserver network. Is there a way forward?

13 February 2022

Gunnar Wolf: Got to boot a RPi Zero 2 W with Debian

About a month ago, I got tired of waiting for the newest member of the Raspberry product lineup to be sold in Mexico, and I bought it from a Chinese reseller through a big online shopping platform. I paid quite a premium (~US$85 instead of the advertised US$15), and got it delivered ten days later… Anyway, it's known that this machine does not yet boot mainline Linux. The vast majority of ARM systems require the bootloader to load a Device Tree file presenting the map of the hardware's characteristics. And while the RPi Zero 2 W (hey, what an awful and confusing naming scheme they chose!) is mostly similar to an RPi3B+, it is not quite the same thing. A kernel with the RPi3B+'s device tree will refuse to boot. Anyway, I started digging, and found that some days ago Stephan Wahren sent a patch to the linux-arm-kernel mailing list with a matching device tree. Read the patch! It's quite simple to read (what is harder is knowing where each declaration should go, if you want to write your own, of course). It basically includes all the basic details for the main chip in the RPi3 family (BCM2837), also pulls in the declarations for the BCM2836 present in the RPi2, and adds the necessary bits for the USB OTG connection and the WiFi and Bluetooth declarations. It registers the model name as Raspberry Pi Zero 2 W, which you can easily see in the following photo, informs the kernel it has 512MB of RAM, and… Well, really, it's an easy device tree to read, don't be shy! So, I booted my RPi 3B+ with a freshly downloaded Bookworm image, installed and unpacked linux-source-5.15, applied Stephan's patch, and added the following so the DTB would also be generated in the arm64 tree:
--- /dev/null   2022-01-26 23:35:40.747999998 +0000
+++ arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dts       2022-02-13 06:28:29.968429953 +0000
@@ -0,0 +1 @@
+#include "arm/bcm2837-rpi-zero-2-w.dts"
Then, I ran a simple make dtbs, and… it failed, because bcm283x-rpi-wifi-bt.dtsi is not yet in the kernel. OK, no worries: getting wireless to work is a later step. I commented out the lines causing the conflict (10, 33-35, 134-136), and:
root@rpi3-20220212:/usr/src/linux-source-5.15# make dtbs
  DTC     arch/arm64/boot/dts/broadcom/bcm2837-rpi-zero-2-w.dtb
Great! I just copied that generated file over to /boot/firmware/, moved the SD card over to my RPi Zero 2 W, and behold! It boots! When I bragged about it in #debian-raspberrypi, steev suggested I pull in the WiFi patch, which has also been submitted (but not yet accepted) for kernel inclusion. I did so, uncommented the lines I had modified, and built again. It builds correctly, and again I copied the DTB over. It still does not find the WiFi; dmesg still complains about a missing bit of firmware (failed to load brcm/brcmfmac43430b0-sdio.raspberrypi,model-zero-2-w.bin). Steev pointed out it can be downloaded from the RPi Distro GitHub page, but I called it a night and didn't pursue it any further ;-) So I understand this post is still a far cry from saying "our images properly boot on an RPi Zero 2 W", but we will get there…
